Does Fundamental Indexation Lead to Better Risk Adjusted Returns? New Evidence from Australian Securities Exchange
Fundamental indexing based on accounting valuation has drawn significant interest from academics and practitioners in recent years as an alternative to capitalization-weighted indexing based on market valuation. This paper investigates the claimed superiority of the fundamental indexation strategy using data for Australian Securities Exchange (ASX) listed stocks between 1985 and 2010. Not only do our results strongly support the outperformance claims observed in other geographical markets, but we also find that the excess returns from fundamental indexation in the Australian market are actually much higher. The fundamental indexation strategy does underperform during strong bull markets, although this effect diminishes over longer time horizons. On a rolling five-year basis, the fundamental index always outperforms the capitalization-weighted index. Contrary to many previous studies, our results show that the superior performance of fundamental indexation cannot be attributed to value, size, or momentum effects. Overall, the findings indicate that fundamental indexation could offer potential outperformance over traditional indexation based on market capitalization, even after adjusting for the former's slightly higher turnover and transaction costs.
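The weighting scheme behind fundamental indexation can be sketched as follows. The tickers, metric values, and the equal-weighted composite rule are illustrative assumptions, not data or methodology from the study itself:

```python
import pandas as pd

# Hypothetical fundamentals for three ASX-listed stocks; weights are set
# proportional to a composite of accounting measures rather than market cap.
fundamentals = pd.DataFrame({
    "ticker": ["BHP", "CBA", "CSL"],
    "book_value": [120.0, 80.0, 40.0],
    "sales": [60.0, 25.0, 10.0],
    "cash_flow": [30.0, 18.0, 8.0],
})

# Average each stock's share of every fundamental metric (one common
# composite rule); the resulting weights sum to 1 by construction.
metrics = ["book_value", "sales", "cash_flow"]
shares = fundamentals[metrics] / fundamentals[metrics].sum()
fundamentals["weight"] = shares.mean(axis=1)

print(fundamentals[["ticker", "weight"]])
```

Unlike capitalization weighting, these weights ignore market prices entirely, which is what decouples the index from any mispricing in market valuations.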
Highlighting objects of interest in an image by integrating saliency and depth
Stereo images have been captured primarily for 3D reconstruction in the past.
However, the depth information acquired from stereo can also be used along with
saliency to highlight certain objects in a scene. This approach can be used to
make still images more interesting to look at, and highlight objects of
interest in the scene. We introduce this novel direction in this paper, and
discuss the theoretical framework behind the approach. Even though we use depth
from stereo in this work, our approach is applicable to depth data acquired
from any sensor modality. Experimental results on both indoor and outdoor
scenes demonstrate the benefits of our algorithm.
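One way to integrate saliency and depth as described above is a simple per-pixel fusion rule. The Gaussian depth-proximity term, the dimming factors, and the `highlight` function below are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def highlight(image, saliency, depth, focus_depth, sigma=0.2):
    """Emphasize pixels that are both salient and near a chosen depth plane.

    image, saliency: float arrays in [0, 1]; depth: float array (any units).
    """
    # Gaussian falloff with distance from the focus depth plane (assumed rule).
    proximity = np.exp(-((depth - focus_depth) ** 2) / (2 * sigma ** 2))
    weight = saliency * proximity            # high for salient, in-focus pixels
    weight = weight / (weight.max() + 1e-12)
    # Dim low-weight regions so objects of interest stand out.
    return image * (0.3 + 0.7 * weight)
```

A salient object at the focus depth keeps full brightness, while background regions are attenuated toward 30% of their original intensity.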
Entropy-difference based stereo error detection
Stereo depth estimation is error-prone; hence, effective error detection
methods are desirable. Most such existing methods depend on characteristics of
the stereo matching cost curve, making them unduly dependent on functional
details of the matching algorithm. As a remedy, we propose a novel error
detection approach based solely on the input image and its depth map. Our
assumption is that the entropy of any point in an image will be significantly
higher than the entropy of its corresponding point in the image's depth map. In
this paper, we propose a confidence measure, Entropy-Difference (ED) for stereo
depth estimates and a binary classification method to identify incorrect
depths. Experiments on the Middlebury dataset show the effectiveness of our
method. Our proposed stereo confidence measure outperforms 17 existing measures
in all aspects except occlusion detection. Established metrics such as
precision, accuracy, recall, and area-under-curve are used to demonstrate the
effectiveness of our method.
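The Entropy-Difference idea can be sketched as follows; the patch size, histogram binning, and zero threshold are illustrative assumptions rather than the paper's exact parameters:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    # Shannon entropy of the intensity histogram of a local patch.
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_difference(image, depth, win=7):
    # ED(x, y) = H(image patch) - H(depth patch). Under the assumption above,
    # a correct depth estimate should have lower local entropy than the image.
    h, w = image.shape
    r = win // 2
    ed = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ip = image[y - r:y + r + 1, x - r:x + r + 1]
            dp = depth[y - r:y + r + 1, x - r:x + r + 1]
            ed[y, x] = patch_entropy(ip) - patch_entropy(dp)
    return ed

def flag_errors(ed, tau=0.0):
    # Binary classification of incorrect depths: low confidence means the
    # depth patch is unexpectedly "busy" (the threshold tau is an assumption).
    return ed < tau
```

A textured image region over a smooth (correct) depth surface yields a large positive ED, while noisy or erroneous depth drives ED toward zero or below the threshold.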
Learning Distributions via Monte-Carlo Marginalization
We propose a novel method to learn intractable distributions from their
samples. The main idea is to use a parametric distribution model, such as a
Gaussian Mixture Model (GMM), to approximate intractable distributions by
minimizing the KL-divergence. Based on this idea, there are two challenges that
need to be addressed. First, the computational complexity of KL-divergence is
unacceptable when the dimensionality of the distributions increases. Monte-Carlo
Marginalization (MCMarg) is proposed to address this issue. The second
challenge is the differentiability of the optimization process, since the
target distribution is intractable. We handle this problem by using Kernel
Density Estimation (KDE). The proposed approach is a powerful tool to learn
complex distributions, and the entire process is differentiable. Thus, it can be
a better substitute for variational inference in variational auto-encoders
(VAEs). Strong evidence of the benefit of our method is that the
distributions learned by the proposed approach can generate better images, even
when based on a pre-trained VAE's decoder. Building on this, we devise a
distribution-learning auto-encoder which outperforms the VAE under the same
network architecture. Experiments on standard datasets and synthetic data
demonstrate the efficiency of the proposed approach.
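The core idea, fitting a parametric GMM to samples by minimizing a Monte-Carlo estimate of the KL-divergence against a KDE stand-in for the intractable target, can be sketched in one dimension. The two-component model, shared variance, and Nelder-Mead optimizer here are simplifying assumptions, not the paper's actual procedure:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Samples from an "intractable" bimodal target distribution p.
samples = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])

# KDE provides a tractable, pointwise-evaluable stand-in for log p.
kde = gaussian_kde(samples)
log_p = np.log(kde(samples) + 1e-12)

def gmm_logpdf(x, params):
    # Equal-weight two-component GMM; params = [mu1, mu2, log_sigma].
    mu1, mu2, log_s = params
    s = np.exp(log_s)
    c1 = np.exp(-0.5 * ((x - mu1) / s) ** 2)
    c2 = np.exp(-0.5 * ((x - mu2) / s) ** 2)
    return np.log(0.5 * (c1 + c2) / (s * np.sqrt(2 * np.pi)) + 1e-12)

def mc_kl(params):
    # Monte-Carlo estimate of KL(p || q): E_p[log p - log q] over samples ~ p.
    return np.mean(log_p - gmm_logpdf(samples, params))

init = np.array([-1.0, 1.0, 0.0])
res = minimize(mc_kl, init, method="Nelder-Mead")
```

After optimization the GMM means should migrate toward the two modes of the target, even though the target density was never available in closed form.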